Bayesian Neural Network Language Modeling for Speech Recognition

Authors

Abstract


Similar References

Improved Bayesian Training for Context-Dependent Modeling in Continuous Persian Speech Recognition

Context-dependent modeling is a widely used technique for better phone modeling in continuous speech recognition. Although different types of context-dependent models have been used, triphones are known to be the most effective. In this paper, a Maximum a Posteriori (MAP) estimation approach has been used to estimate the parameters of the untied triphone model set used in data-driven clust...
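For orientation, the MAP idea can be written as a one-line update: the estimate interpolates a prior parameter with the data statistics, weighted by how much data the state actually saw. The sketch below shows this for a single Gaussian mean; the function, the prior weight tau, and all names are illustrative assumptions, not the cited paper's exact recipe.

# Minimal MAP update for a Gaussian mean (illustrative sketch only).
# mu_map = (tau * mu_prior + sum_t gamma_t * x_t) / (tau + sum_t gamma_t)

def map_mean_update(mu_prior, observations, posteriors, tau=10.0):
    """MAP estimate of a Gaussian mean from weighted observations.

    mu_prior     : prior mean (e.g., from a context-independent model)
    observations : observed feature values assigned to this state
    posteriors   : occupation probabilities gamma_t for each observation
    tau          : prior weight; larger tau keeps the estimate closer to the prior
    """
    weighted_sum = sum(g * x for g, x in zip(posteriors, observations))
    occupancy = sum(posteriors)
    return (tau * mu_prior + weighted_sum) / (tau + occupancy)

# With little data the estimate stays near the prior; with more data it
# moves toward the maximum-likelihood (purely data-driven) mean.
print(map_mean_update(0.0, [1.0, 1.2, 0.8], [1.0, 1.0, 1.0]))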


Neural network language models for conversational speech recognition

Recently there has been growing interest in using neural networks for language modeling. In contrast to the well-known backoff n-gram language models (LMs), the neural network approach tries to limit the problems of data sparseness by performing the estimation in a continuous space, which allows smooth interpolation. This type of LM is therefore interesting for tasks for which only a very...
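As a concrete illustration of "estimation in a continuous space", the toy feed-forward model below maps each history word to a learned embedding and scores the whole vocabulary through a hidden layer, so similar histories end up close together and share probability mass. All dimensions, weights, and names are made up for the sketch and are not the cited paper's architecture.

import numpy as np

# Toy feed-forward n-gram LM in the continuous-space style (illustrative only).
V, D, H, CONTEXT = 1000, 32, 64, 3            # vocab size, embedding dim, hidden dim, history length
rng = np.random.default_rng(0)

E = rng.normal(0, 0.1, (V, D))                # word embeddings: the "continuous space"
W_h = rng.normal(0, 0.1, (CONTEXT * D, H))    # concatenated history -> hidden layer
W_o = rng.normal(0, 0.1, (H, V))              # hidden -> output logits

def next_word_probs(context_ids):
    """P(w | history) for a fixed-length history of word ids."""
    x = np.concatenate([E[i] for i in context_ids])   # look up and concatenate embeddings
    h = np.tanh(x @ W_h)                               # continuous representation of the history
    logits = h @ W_o
    logits -= logits.max()                             # numerical stability for the softmax
    p = np.exp(logits)
    return p / p.sum()

probs = next_word_probs([12, 7, 45])
print(probs.shape, round(probs.sum(), 6))     # (1000,) 1.0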


Scalable Recurrent Neural Network Language Models for Speech Recognition

Language modelling is a crucial component in many areas and applications, including automatic speech recognition (ASR). n-gram language models (LMs) have been the dominant technology during the last few decades, due to their easy implementation and good generalization on unseen data. However, there are two well-known problems with n-gram LMs: data sparsity and the n-order Markov assumption. Previou...
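To see both problems in miniature, the bigram sketch below conditions on only the previous word (the Markov assumption) and assigns zero probability to any unseen word pair unless it is smoothed; add-k smoothing stands in here for proper backoff, and the corpus and names are purely illustrative.

from collections import Counter

# Tiny maximum-likelihood bigram LM illustrating the Markov assumption and data sparsity.
corpus = "the cat sat on the mat the dog sat on the rug".split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab = set(corpus)

def p_bigram(word, prev, k=0.0):
    """P(word | prev) with optional add-k smoothing; the history is only one word."""
    return (bigrams[(prev, word)] + k) / (unigrams[prev] + k * len(vocab))

print(p_bigram("sat", "cat"))            # seen bigram  -> 1.0
print(p_bigram("dog", "cat"))            # unseen bigram -> 0.0 (data sparsity)
print(p_bigram("dog", "cat", k=0.1))     # add-k smoothing gives it a small nonzero probability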


Neural Network-based Language Model for Conversational Telephone Speech Recognition

Preface: This dissertation is the result of my own work and includes nothing which is the outcome of work done in collaboration except where specifically indicated in the text. I hereby declare that my thesis does not exceed the limit of length prescribed in the Special Regulations of the M.Phil. examination for which I am a candidate. The length of my thesis is 14980 words. Acknowledgements: I ...


Investigating Bidirectional Recurrent Neural Network Language Models for Speech Recognition

Recurrent neural network language models (RNNLMs) are powerful language modeling techniques. Significant performance improvements over n-gram language models have been reported in a range of tasks, including speech recognition. Conventional n-gram and neural network language models are trained to predict the probability of the next word given its preceding context history. In contrast, bi...
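A minimal forward pass makes the contrast concrete: a unidirectional RNNLM folds the entire preceding history into one hidden state before predicting the next word, whereas a bidirectional variant would add a second recurrence run over the future words and combine the two states. Everything below (dimensions, weights, names) is an illustrative sketch, not the configuration studied in the cited paper.

import numpy as np

# Minimal unidirectional RNN language-model forward pass (illustrative only).
V, D, H = 1000, 32, 64                      # vocab size, embedding dim, hidden dim
rng = np.random.default_rng(0)
E   = rng.normal(0, 0.1, (V, D))            # word embeddings
W_x = rng.normal(0, 0.1, (D, H))            # input  -> hidden
W_h = rng.normal(0, 0.1, (H, H))            # hidden -> hidden (recurrence over the history)
W_o = rng.normal(0, 0.1, (H, V))            # hidden -> output logits

def rnnlm_probs(word_ids):
    """Return P(next word | all preceding words) after consuming word_ids."""
    h = np.zeros(H)
    for w in word_ids:                      # the hidden state summarizes the whole context history
        h = np.tanh(E[w] @ W_x + h @ W_h)
    logits = h @ W_o
    logits -= logits.max()
    p = np.exp(logits)
    return p / p.sum()

p = rnnlm_probs([3, 17, 256, 9])
print(p.shape, round(p.sum(), 6))           # (1000,) 1.0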



Journal

Journal title: IEEE/ACM Transactions on Audio, Speech, and Language Processing

Year: 2022

ISSN: 2329-9304, 2329-9290

DOI: https://doi.org/10.1109/taslp.2022.3203891